
Towards an Operational Responsible AI Framework for Learning Analytics in Higher Education

Tirado, Alba Morales, Mulholland, Paul, Fernandez, Miriam

arXiv.org Artificial Intelligence

Universities are increasingly adopting data-driven strategies to enhance student success, with AI applications like Learning Analytics (LA) and Predictive Learning Analytics (PLA) playing a key role in identifying at-risk students, personalising learning, supporting teachers, and guiding educational decision-making. However, concerns are rising about potential harms these systems may pose, such as algorithmic biases leading to unequal support for minority students. While many have explored the need for Responsible AI in LA, existing works often lack practical guidance for how institutions can operationalise these principles. In this paper, we propose a novel Responsible AI framework tailored specifically to LA in Higher Education (HE). We started by mapping 11 established Responsible AI frameworks, including those by leading tech companies, to the context of LA in HE. This led to the identification of seven key principles such as transparency, fairness, and accountability. We then conducted a systematic review of the literature to understand how these principles have been applied in practice. Drawing from these findings, we present a novel framework that offers practical guidance to HE institutions and is designed to evolve with community input, ensuring its relevance as LA systems continue to develop.


Responsible AI principles from Microsoft

#artificialintelligence

We apply our responsible AI principles with guidance from committees that advise our leadership, engineering, and every team across the company. Learn how responsible AI governance is crucial to guiding AI innovation at Microsoft.


Microsoft axes team ensuring responsible AI principles reflected in products

#artificialintelligence

MANILA, Philippines – Microsoft has laid off a team set up to guide innovations in artificial intelligence (AI) that would be ethical, responsible and otherwise sustainable, according to a report on Platformer on Tuesday, March 14 (March 13, US time). The layoffs are part of a reorganization push announced in January, which would slash about 10,000 jobs, and follow a multibillion-dollar investment by Microsoft into OpenAI. Microsoft said in a statement it was "committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this." It added: "Over the past six years we have increased the number of people across our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice." While Microsoft still maintains its Office of Responsible AI, the ethics and society team was in charge of ensuring the company's responsible AI principles were reflected in the designs of products shipped.


How is fairness in AI calculated?

#artificialintelligence

What is AI fairness and why is it important? Fairness is a subjective term that is difficult to define in general. One definition describes it as "an agreement between two or more parties about what constitutes acceptable behavior or the degree of impartiality that exists between two or more parties." When used in relation to AI systems, fairness explores the potential impact of AI systems' decision-making on society and who decides how those decisions are going to be made. There are many definitions of what fairness means, but the most important one for AI is that a system should not discriminate against any individual or group of people based on their race, gender, sexual orientation, age, or other protected characteristics. AI systems are designed to learn from data and their environment, so they can make more accurate decisions.
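The headline above asks how fairness is actually calculated. One common way to put a number on group-level (un)fairness is the demographic parity difference: the gap in positive-outcome rates between two groups. The sketch below is illustrative only; the function names and sample data are invented for this example, and real fairness audits use several complementary metrics (equalized odds, predictive parity, etc.) rather than a single number.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-decision rates between two groups.

    0.0 means both groups receive positive decisions at the same rate;
    larger values indicate greater disparity.
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical decisions for two demographic groups (1 = approved, 0 = denied)
group_a = [1, 1, 0, 1, 0, 1, 1, 1]  # 6/8 approved -> 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 approved -> 0.375

print(demographic_parity_difference(group_a, group_b))  # -> 0.375
```

A practitioner would then judge whether the observed gap is acceptable for the application at hand, which is exactly the "who decides" question the snippet raises.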


Can standards help a CIO address AI/ML risks? - IT World Canada

#artificialintelligence

As more and more organizations develop and implement Artificial Intelligence (AI) and Machine Learning (ML) applications, questions about the reliability of the results are increasing. Some high-profile AI/ML lapses risk giving the technology a bad name, and the related media reports have created nervousness among CIOs and senior management. Real-world examples have undermined society's confidence in AI/ML applications. To avoid potentially thorny issues and headlines that damage the organization's reputation, CIOs and senior management need a way to assess the design and performance of their AI/ML applications. "Our members and other organizations have indicated that our standard has helped them incorporate responsible AI into their AI/ML applications," says Keith Jansa, Executive Director of the CIO Strategy Council (CIOSC). The CIOSC is a not-for-profit corporation providing a forum for members to transform, shape and influence the Canadian information and technology ecosystem, and is a Standards Development Organization (SDO) accredited by the Standards Council of Canada (SCC). "Our public and private sector members see value in our standards in part because of the strength of our process," says Jansa. "We provide a neutral forum for standards development work using a consensus-based process that brings together a range of stakeholders and is accredited by the SCC." The CIOSC accreditation confers acceptance of the World Trade Organization (WTO) Technical Barriers to Trade (TBT) Annex 3 Code of Good Practice for the Preparation, Adoption and Application of Standards by Standardizing Bodies. That provides end users assurance that the "Ethical design and use of automated decision systems" standard was developed using best practices.


4 principles for responsible AI

#artificialintelligence

Alejandro Saucedo is the engineering director at Seldon, a chief scientist at the Institute for Ethical AI and Machine Learning, and the chair of the Linux Foundation's GPU Acceleration Committee. Artificial Intelligence (AI) is set to become ubiquitous over the coming decade, with the potential to upend our society in the process. Whether it be improved productivity, reduced costs or even the creation of new industries, the economic benefits of the technology are set to be colossal. In total, McKinsey estimates that AI will contribute more than $13 trillion to the global economy by 2030.


Navigate the road to Responsible AI - KDnuggets

#artificialintelligence

Find out how to implement AI responsibly: watch the recorded webinar Responsible AI in Practice to learn about fairness, AI in the law, and AI security from experts. The use of machine learning (ML) applications has moved beyond the domains of academia and research into mainstream product development across industries looking to add artificial intelligence (AI) capabilities. Along with the increase in AI and ML applications is a growing interest in principles, tools, and best practices for deploying AI ethically and responsibly. In efforts to organize ethical, responsible tools and processes around a common collective, a number of names have been bandied about, including Ethical AI, Human Centered AI, and Responsible AI. Based on what we've seen in industry, several companies, including some major cloud providers, have focused on the term Responsible AI, and we'll do the same in this post.


Responsible AI principles from Microsoft

#artificialintelligence

Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values. Learn how we are putting these principles into practice at Microsoft.